Easing the Inferential Leap in Competency Modeling: The Effects of Task-Related Information and Subject Matter Expertise

Authors

  • Filip Lievens
  • Juan I. Sanchez
  • Wilfried De Corte
Abstract

Despite the rising popularity of the practice of competency modeling, research on competency modeling has lagged behind. The present study begins to close this practice-science gap through three studies (one lab study and two field studies), which employ generalizability analysis to shed light on (a) the quality of inferences made in competency modeling and (b) the effects of incorporating elements of traditional job analysis into competency modeling to raise the quality of competency inferences. Study 1 showed that competency modeling resulted in poor inter-rater reliability and poor between-job discriminant validity among inexperienced raters. In contrast, Study 2 suggested that the quality of competency inferences was higher among a variety of job experts in a real organization. Finally, Study 3 showed that blending competency modeling efforts and task-related information increased both inter-rater reliability among SMEs and their ability to discriminate among jobs. In general, this set of results highlights that the inferences made in competency modeling should not be taken for granted, and that practitioners can improve competency modeling efforts by incorporating some of the methodological rigor inherent in job analysis.

Easing the Inferential Leap in Competency Modeling: The Effects of Task-Related Information and Subject Matter Expertise

In recent years, the practice of competency modeling has made rapid inroads in organizations (Lucia & Lepsinger, 1999; Schippmann, 1999). In contrast to traditional job analysis, competency modeling ties the derivation of job specifications to the organization’s strategy, which, together with non-strategic job requirements, is used to generate a “common language” in the form of a set of human attributes or individual competencies. This same set of competencies usually serves as a platform for various HR practices such as performance evaluation, compensation, selection, and training (Schippmann et al., 2000). The calls for strategic alignment of HR practices make competency modeling a timely mechanism to build the organization’s strategy into all HR practices (Becker, Huselid, & Ulrich, 2001). An inspection of the ABI/INFORM database attests to the momentum of competency modeling, with over 500 articles on this topic published between 1995 and 2003, in sharp contrast to the 87 articles published between 1985 and 1995.

In contrast to its flourishing popularity among practitioners, the scientific community has regarded competency modeling with some degree of skepticism. The validity of “competencies” as measurable constructs appears to be at the core of this controversy (Barrett & Callahan, 1997; Barrett & Depinet, 1991; Lawler, 1996; Pearlman, 1997). Specifically, the process of deriving competencies requires a rather large inferential leap because competency modeling often fails to focus on detailed task statements prior to inferring competencies (Schippmann et al., 2000). In the U.S., this aspect of competency modeling appears to be problematic in light of current, quasi-legal standards such as the Uniform Guidelines on Employee Selection Procedures (1978), which require demonstrable linkages between job specifications such as knowledge, skills, abilities, and other requirements (KSAOs) on the one hand and important job behaviors on the other.
In the absence of these linkages to important job behaviors, Sanchez and Levine (2001) observed that “making and justifying inferential leaps on the slippery floor of behaviorally-fuzzy competencies is certainly a methodological challenge” (p. 85).

To date, there is a paucity of empirical studies actually scrutinizing the quality of the inferences required in competency modeling. Even more important, virtually no studies have compared competency modeling to more traditional job analysis approaches (for an exception, see Morgeson, Delaney-Klinger, Mayfield, Ferrara, & Campion, in press). Therefore, it is still unclear whether and how the assumed lack of task information inherent in competency modeling detracts from the quality of its inferences. The dearth of research on this issue is surprising because the quality of the inferences made in competency modeling has not only legal ramifications regarding possible violations of sound measurement procedures, but also practical consequences, such as HR practices that fail to truly leverage the organization’s human resource capital (Becker et al., 2001).

We present three studies that begin to close this practice-science gap by examining the quality of inferences drawn in competency modeling. Specifically, we assessed the quality of inferences made in competency modeling and whether elements of traditional job analysis (i.e., task-related information and subject matter expertise) can be fruitfully incorporated into competency modeling to enhance the quality of such inferences. We operationalized “quality” in terms of both inter-rater reliability and discriminability among jobs and/or competencies. These criteria are important because they reflect underlying issues of reliability and discriminant validity in work analysis data (Dierdorff & Wilson, 2003; Morgeson & Campion, 1997).

Study Background

The traditional task analysis approach to job analysis provides an indirect estimation of knowledge, skills, abilities, and other characteristics (KSAOs) (Gatewood & Field, 2001, pp. 367-380; Morgeson & Campion, 2000). That is, the complex inferential leap from the job to the specification of KSAOs (Cornelius & Lyness, 1980; Morgeson & Campion, 1997) is broken down into a series of more manageable steps. First, the various job tasks are identified. Next, subject matter experts (SMEs) are asked to judge the importance or criticality of these tasks. Finally, given these tasks, SMEs make inferences about which KSAOs are most important. The methodological rigor of this step-by-step approach lends credence to the SMEs’ inferences. Although the widely employed task analysis approach relies on this indirect estimation method, it should be acknowledged that job-analytic approaches vary widely in the extent to which they focus on job tasks and other descriptors (e.g., Christal, 1974; Fine, 1988; Lopez, Kesselman, & Lopez, 1981; Prien, Prien, & Gamble, 2004). Other forms of work analysis involve directly estimating KSAOs by asking SMEs to rate the importance of various KSAOs for a given job. In this direct estimation method, the intermediate step of specifying tasks is not an explicit requirement. This kind of holistic KSAO judgment calls for a larger inferential leap than asking SMEs to infer KSAOs from specific task statements. An example of direct estimation is the job element method (Primoff, 1975).

How are the focal worker attributes for a given job determined in competency modeling?
Schippmann et al. (2000) sought to delineate the major characteristics of competency modeling by soliciting the opinions of 37 work analysis experts and authority figures. As concluded by Schippmann et al., competency modeling is less rigorous than traditional, task-based job analysis. In most competency modeling approaches, only one type of descriptor information (i.e., competencies) is gathered, and SMEs are not provided with detailed task statements prior to making inferences about which competencies are important. Instead, similar to a direct estimation method, competencies are inferred from just a broad job description plus information about the organization’s strategy. Although the provision of strategic information might impose a common frame of reference on SMEs, thereby enhancing their inter-rater reliability (cf. Sulsky & Day, 1994), Schippmann et al. concluded that the reduced methodological rigor (i.e., the absence of detailed task statements) of most competency modeling approaches increases the difficulty of the inferences required from SMEs.

Indeed, various empirical studies in the traditional job analysis literature found that raters are less capable of making reliable judgments for the entire job than for narrower descriptors such as task statements (Butler & Harvey, 1988; Dierdorff & Wilson, 2003; Sanchez & Levine, 1989, 1994). Hughes and Prien (1989) showed that even when SMEs were given task statements, a relatively large inferential leap was still required, as evidenced by the moderate inter-rater agreement reported. Finally, Morgeson et al. (in press) found that global judgments similar to those made in competency modeling were more inflated than task-level judgments.

From a theoretical point of view, the derivation of worker attributes required for job performance can be conceptualized as an inferential decision, in which job events need to be recalled and then reduced to a set of dimensions (Holland, Holyoak, Nisbett, & Thagard, 1986). Clearly, the amount of information that needs to be recalled and integrated into a set of job-level competencies exceeds that required to make similar judgments at the narrower task level. Apart from judgment theory, categorization theory (Srull & Wyer, 1980) also predicts more biases for holistic judgments than for task-based judgments, because experts might make judgments on the basis of what they think the job involves (a category) rather than on the basis of factual tasks. In short, empirical evidence and theoretical arguments suggest that the quality of inferences is higher when task-based information is available.

The SMEs in Schippmann et al.’s (2000) study hinted that the future of work analysis might consist of blending the two approaches previously discussed (i.e., task analysis and competency modeling). This blended approach represents an effort to incorporate not only the organization’s strategy into the derivation of broad worker attributes or “competencies,” but also the methodological rigor of task analysis, where SMEs are provided with task statements prior to inferring KSAOs. Such a blended approach might improve the quality of the competency inferences because it capitalizes on the strengths of both the task analysis and competency modeling approaches.
First, information about the organization’s strategy provides SMEs with a common frame of reference regarding the strategic implications for the HR function, which should facilitate the process of identifying worker attributes or competencies aligned with such a strategy. Second, the information about important task statements should decrease the complexity of the competency judgments required from SMEs, who would have a more concrete referent of job behaviors than that provided by a description of the organization’s strategy.

The primary purpose of the studies presented here (see Studies 1 and 3) was to systematically compare the quality of inferences in various work analysis approaches. On the basis of the research reviewed above, we expected that the quality of inferences in the blended approach would be higher than the quality of inferences in the task-based approach, which in turn would be higher than the quality of inferences in the competency modeling approach.

Besides the inclusion of task information, traditional job analysis further outperforms competency modeling in terms of its methodological rigor when composing SME panels. As noted by Schippmann et al. (2000), raters barely familiar with the job are sometimes employed in competency modeling. In addition, in some competency modeling projects only a few people select the competencies deemed to be important. To date, we do not know how insufficient subject matter expertise might impact the quality of inferences made in competency modeling. In a similar vein, it is unknown how many SMEs, or which types of SMEs, are needed to obtain reliable competency ratings. Traditional job-analytic research might shed light on these questions. In fact, job analysis approaches have typically preferred job incumbents because the quality of their ratings is superior to that of ratings made by naïve raters (usually college students) (Cornelius, DeNisi, & Blencoe, 1984; Friedman & Harvey, 1986; Voskuijl & Van Sliedregt, 2002). However, familiarity with the job, such as that possessed by job incumbents, might be a necessary albeit insufficient requirement for accurately determining job specifications. Specifically, it has been argued that other sources such as supervisors, HR specialists, and internal customers should probably supplement the information provided by job incumbents (Brannick & Levine, 2002; Sanchez, 2000). For instance, Hubbard et al. (1999) argued that job incumbents might have difficulty judging the relevance to their job of abstract attributes such as competencies, because many workers might have never distinguished between their personal attributes and those required by their job. The literature comparing incumbent to non-incumbent ratings has also shown that, depending on the level of job satisfaction and occupational complexity, incumbent ratings do not necessarily agree with ratings from other sources (Gerhart, 1988; Sanchez, Zamora, & Viswesvaran, 1998; Sanchez, 2000; Spector & Jex, 1991). Another reason for supplementing incumbent ratings with those from other sources is that incumbents may lack sufficient foresight and knowledge of technological innovations to define strategically aligned work requirements such as those demanded by competency modeling.
In short, a second purpose of this paper was to compare the quality of inferences between a group of naïve student raters (see Study 1) and a group of experienced SMEs (incumbents, supervisors, HR specialists, and internal customers) in a real organization (see Studies 2 and 3). On the basis of the research mentioned above, we expected that the quality of inferences would be higher among a variety of job experts than among naïve raters.

Overview of Studies

In the following sections, we present three studies that, together, enabled us to test our predictions about the effects of subject matter expertise and task-related information. Study 1 was a laboratory study using naïve student raters wherein we compared the quality of inferences of all three work analysis approaches discussed above (i.e., task-based job analysis, competency modeling, and the blended approach). Study 2 focused on competency modeling: the quality of inferences made in competency modeling was investigated in an actual organization among different types of SMEs (i.e., incumbents, supervisors, HR specialists, and internal customers). Finally, Study 3 was a quasi-experiment wherein the competency modeling and blended approaches were compared using a group of SMEs from the same organization.

Note that the jobs rated differed across these three studies. Therefore, job-specific effects might have been confounded with our manipulations. However, we felt that keeping familiarity with the job constant across studies was more important than keeping the job content per se constant (see Hahn & Dipboye, 1988). Students are typically not familiar with jobs such as those employed in Studies 2 and 3, and therefore their ability to rate such jobs is questionable. Accordingly, jobs that were familiar to student raters were chosen in Study 1. In a similar vein, familiarity with the job served as the prime criterion for including individuals as SMEs in Studies 2 and 3.

Study 1

Method

Study 1 compared the quality of inferences in three work analysis approaches (i.e., task-based job analysis, competency modeling, and the blended approach) among student raters. Participants were 39 graduate students (31 men, 8 women; mean age = 22.1 years) in Industrial and Organizational Psychology. First, they received a two-hour lecture about work analysis and a one-hour workshop about a specific competency modeling technique (see below), wherein they practiced this technique by determining the competencies of one job (assistant to the Human Resources Manager). They then received feedback about the competencies that had been chosen by a panel of SMEs for this job. Next, their task was to determine the competencies of three jobs (accountant, executive secretary, and sales manager). In a pilot study, a similar group of 15 graduate students (11 women, 4 men; mean age = 21.4 years) had rated their familiarity with these jobs on a five-point scale, ranging from 1 = I am not at all familiar with the content of this job to 5 = I am very familiar with the content of this job. Results showed that students were relatively familiar with these jobs: accountant (M = 3.40, SD = .91), executive secretary (M = 3.47, SD = .74), and sales manager (M = 3.20, SD = .86).

Participants were randomly assigned to one of the following conditions. In the first condition (“competency modeling approach”), they received only a description of the business and HR strategy of the company (e.g., core values of the company).
This description originated from an actual HR report of an organization. In the second condition (“task-based approach”), participants received detailed information about the tasks related to each of the three jobs; no information about the HR strategy was provided. The third condition (“blended approach”) was a combination of Conditions 1 and 2: participants received detailed information about both the tasks associated with the jobs and the business and HR strategy. Afterwards, all participants were instructed to determine independently the competencies of the three jobs using the card sorting method. The sorting and rating of the jobs lasted approximately 2.5 hours.

The Portfolio Sort Cards of the LEADERSHIP ARCHITECT® (Lominger Limited) consist of 67 cards, each describing a competency according to behaviorally-anchored definitions (Lombardo & Eichinger, 2003). The Portfolio Sort Cards are a Q-sort method in which SMEs sort the 67 cards (competencies) into 5 rating categories: 1 (essential for success), 2 (very important or necessary), 3 (nice to have), 4 (less important), and 5 (not important). To reduce rating inflation, the Portfolio Sort Cards limit the number of cards that can be sorted into each category: SMEs must assign a rating of 1 to 6 cards, a rating of 2 to 16 cards, a rating of 3 to 23 cards, a rating of 4 to 16 cards, and a rating of 5 to 6 cards. Given this forced distribution, the competency ratings assigned by a rater to a job represent a set of ipsative scores. However, the dependency between the competency ratings was very small, as illustrated by the average correlation between ratings, which can be estimated as -1/(67 - 1) = -.015 (Clemans, 1966; Greer & Dunlap, 1997). Nevertheless, we employed a data-analytic approach that took ipsativity into account (VanLeeuwen & Mandabach, 2002).

We chose the Portfolio Sort Cards because it is a commercially available method for competency modeling employed by organizations (see Tett, Guterman, Bleier, & Murphy, 2000). In addition, the Portfolio Sort Cards converge closely with the features typically associated with competency determination approaches as described by Schippmann et al. (2000). For example, the Portfolio Sort Cards focus on just one type of descriptor (i.e., competencies) and operationalize it with broad labels consisting of narrative definitions (cf. behavioral anchors). Another similarity with typical competency modeling approaches outlined by Schippmann et al. is that data are collected from a number of content experts. Finally, Schippmann et al. characterized the typical protocol for determining competencies as a semi-structured one; the Q-sort method employed by the Portfolio Sort Cards exemplifies such a semi-structured protocol.
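To make the consequences of the forced distribution concrete, note that every rater-by-job profile of 67 ratings has exactly the same mean and variance (the arithmetic below is ours, derived only from the card counts stated above), which is what renders the scores ipsative:

\[
\bar{x} \;=\; \frac{6(1) + 16(2) + 23(3) + 16(4) + 6(5)}{67} \;=\; \frac{201}{67} \;=\; 3.0,
\qquad
s^2 \;=\; \frac{6(1-3)^2 + 16(2-3)^2 + 23(3-3)^2 + 16(4-3)^2 + 6(5-3)^2}{67} \;=\; \frac{80}{67} \;\approx\; 1.19 .
\]

Because these two moments are fixed by design rather than by the rater, raters can differ only in how they distribute the cards over competencies, not in their overall rating level or spread.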
Analyses

We employed generalizability analysis (Brennan, 1992; Cronbach, Gleser, Nanda, & Rajaratnam, 1972) to understand the sources of variance in competency ratings. A key advantage of generalizability theory, in contrast to classical test theory, is that measurement error is regarded as multifaceted. Whereas classical reliability theory distinguishes only between true and error variance, generalizability theory permits the simultaneous estimation of various sources of variance. In our studies, generalizability analysis was used to estimate simultaneously the following sources of variance: competencies, jobs, raters, and their interactions.

As a second advantage of generalizability analysis, the variance components can be used to estimate a generalizability coefficient, which is an intraclass correlation defined as the ratio of the universe score variance to the expected observed score variance (Brennan, 1992). This coefficient is similar to the classical reliability coefficient, although it is more accurate because multiple sources of error are taken into account. Finally, generalizability analysis allows reliability estimates to be projected under different measurement conditions (Greguras & Robie, 1998). For instance, one might examine whether the reliability of competency ratings would increase when more raters are used, making it possible to prescribe ideal measurement conditions.

Prior to a generalizability analysis, the researcher typically specifies the object of measurement (i.e., the universe score) and the factors (so-called facets) affecting the measurement process (Brennan, 1992). When considered in the context of this study, the variance due to raters is seen as undesirable variance (see also Dierdorff & Wilson, 2003). A large variance component due to raters suggests substantial variation in competency ratings across raters and, therefore, is indicative of low inter-rater reliability. Conversely, variance due to competencies and variance due to jobs are desirable sources of variance, because they indicate discriminant validity across competencies and jobs. Because it is not possible to compute a generalizability coefficient with two objects of measurement in the analysis, we followed the same strategy used in prior generalizability studies in other domains (Greguras & Robie, 1998; Greguras, Robie, Schleicher, & Goff, 2003) and conducted within-competency generalizability analyses as well as within-job generalizability analyses. In the within-competency generalizability analyses, jobs served as the object of measurement and raters as the facet. In the within-job generalizability analyses, competencies served as the object of measurement and raters as the facet.

In the within-competency generalizability analyses, the ipsative nature of the competency ratings was of no consequence because each analysis involved only one competency. However, the within-job generalizability analyses involved the full set of competency ratings, which are ipsative by design and, therefore, not fully independent. To account for this dependency, we replaced the usual computational approach employed for generalizability analyses with a procedure proposed by VanLeeuwen and Mandabach (2002). For each within-job analysis, this procedure resulted in a corrected estimate of the universe score variance, which corresponded to the corrected variance component associated with the object of measurement (i.e., the competencies), and a corrected estimate of the relative error variance, also referred to as the error variance associated with a relative decision. The generalizability coefficient was then determined in the usual way, as the ratio between the estimate of the universe score variance and the sum of this estimate and the estimated relative error variance.
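For the within-competency analyses (jobs as the object of measurement, raters as the facet, in a fully crossed jobs x raters design), the standard one-facet formulas implied by this description can be written as follows; this is generic generalizability-theory notation rather than a formula reported by the authors:

\[
E\hat{\rho}^{2} \;=\; \frac{\hat{\sigma}^{2}_{j}}{\hat{\sigma}^{2}_{j} + \hat{\sigma}^{2}_{\delta}},
\qquad
\hat{\sigma}^{2}_{\delta} \;=\; \frac{\hat{\sigma}^{2}_{jr,e}}{n_{r}},
\]

where \(\hat{\sigma}^{2}_{j}\) is the universe-score (job) variance, \(\hat{\sigma}^{2}_{jr,e}\) is the job x rater interaction variance (confounded with residual error), and \(n_{r}\) is the number of raters. Projecting the coefficient for a different panel size simply substitutes a new \(n_{r}\) (a decision study). For the within-job analyses, the same ratio is formed from the corrected competency variance and the corrected relative error variance obtained from the VanLeeuwen and Mandabach (2002) procedure.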
Results and Discussion

We first conducted the within-competency generalizability analyses. A total of 67 generalizability analyses were conducted, one for each competency. Table 1 presents the results collapsed across competencies, but broken down by condition. The values in the body of the table are the variance components and the mean percentages of variance explained by jobs, raters, and their interaction across analyses. As shown in Table 1, the blended approach produced on average the least variability among raters (18.94%), followed by the task-based approach (21.64%) and the competency modeling approach (22.20%). The opposite trend was apparent for the variance due to jobs. As explained before, rater variance was considered undesirable because it represents unreliability, whereas job variance was considered desirable because it represents discriminant validity. There were virtually no differences among conditions in the variance explained by the interaction between jobs and raters. To examine whether these differences were statistically significant, we conducted a MANOVA using the variance components due to raters, jobs, and their interaction as dependent variables and condition as the independent variable. No significant multivariate main effect emerged, F(6, 392) = 0.61, ns, Wilks's lambda = .98. Follow-up ANOVAs per dependent variable also failed to yield significant differences among conditions.

An inspection of the generalizability coefficients revealed the highest value in the blended approach (.75), followed by the task-based (.74) and competency modeling (.72) approaches. At first sight, these values appear acceptable. However, these results reflect generalizability over 13 raters (recall there were 13 raters per condition). Given that so many raters may not be available in every application, we projected the generalizability coefficient under different measurement conditions (i.e., different numbers of raters; see the lower part of Table 1). For instance, in our experience, practitioners often have access to no more than four SMEs (one supervisor, one job analyst, and two job incumbents) per job. Table 1 shows that when four raters were used, the projected generalizability coefficients barely reached .50. Whereas all prior analyses were conducted within competencies, we also conducted within-job generalizability analyses. The results were identical to those in Table 1.
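As an illustration of the computations behind figures of this kind, the sketch below estimates the one-facet variance components and projects the generalizability coefficient for panels of different sizes. It is a minimal reconstruction in Python, not the authors' code: the function name and the ratings matrix are hypothetical, and the design assumed is the within-competency case (jobs crossed with raters, one observation per cell).

```python
import numpy as np

def g_study_one_facet(X, n_raters_projected=None):
    """Variance components and relative generalizability coefficient for a
    fully crossed one-facet design (objects x raters, one observation per
    cell), using the usual ANOVA expected-mean-square estimators.

    X: 2-D array with rows = objects of measurement (here, jobs) and
    columns = raters."""
    n_o, n_r = X.shape
    grand = X.mean()
    row_means, col_means = X.mean(axis=1), X.mean(axis=0)

    ss_o = n_r * ((row_means - grand) ** 2).sum()          # objects (jobs)
    ss_r = n_o * ((col_means - grand) ** 2).sum()          # raters
    ss_or = ((X - grand) ** 2).sum() - ss_o - ss_r         # interaction + residual

    ms_o = ss_o / (n_o - 1)
    ms_r = ss_r / (n_r - 1)
    ms_or = ss_or / ((n_o - 1) * (n_r - 1))

    var_or = ms_or                                         # sigma^2(jr,e)
    var_o = max((ms_o - ms_or) / n_r, 0.0)                 # sigma^2(jobs)
    var_r = max((ms_r - ms_or) / n_o, 0.0)                 # sigma^2(raters)

    k = n_raters_projected or n_r                          # decision-study panel size
    e_rho2 = var_o / (var_o + var_or / k)                  # relative G coefficient
    return {"jobs": var_o, "raters": var_r,
            "interaction_residual": var_or, "E_rho2": round(e_rho2, 3)}

# Hypothetical ratings of one competency: 3 jobs (rows) by 4 raters (columns).
ratings = np.array([[1, 2, 2, 1],
                    [3, 3, 4, 3],
                    [4, 5, 4, 4]], dtype=float)
print(g_study_one_facet(ratings))                          # observed 4-rater panel
print(g_study_one_facet(ratings, n_raters_projected=13))   # projected 13-rater panel
```

The same projection logic underlies the lower part of Table 1: only the assumed number of raters in the denominator of the relative error term changes.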
Two important conclusions follow from Study 1. First, the provision of task information to student raters did not produce beneficial effects in terms of increasing inter-rater reliability and discriminant validity among jobs. Although the blended approach performed slightly better than the task-based approach, which in turn fared better than the competency modeling approach, no statistically significant differences among the three conditions emerged. Second, Study 1 showed that, regardless of the work analysis approach, the inferential leap from jobs to competencies might have been too large for the student raters. Indeed, generalizability coefficients would never surpass the .50s if only four student raters were used. Therefore, a practical implication of Study 1 is that practitioners interested in competency modeling should be cautious about using naïve raters in their SME panels. As noted by Schippmann et al. (2000), raters barely familiar with the job are sometimes used in competency modeling.

The generalizability of our results might be weakened by the use of student raters. As these students lacked the organizational context, they probably had limited interest in making ratings that had no real impact on HR applications. In addition, given that the jobs targeted (e.g., sales manager, accountant) were relatively common, it is possible that the students were influenced by shared job stereotypes that might have overshadowed the information provided in the various conditions, thereby reducing differences across conditions. Given the limitations inherent in the use of student samples and hypothetical jobs, the next two studies examined competency modeling in an actual organization as carried out by a diverse group of SMEs (i.e., incumbents, supervisors, HR specialists, and internal customers). In addition, these SMEs’ competency inferences had real impact because they affected training and development interventions in the organization.

Study 2

Method

In Study 2, we examined the quality of inferences in competency modeling among a diverse group of SMEs (incumbents, supervisors, HR specialists, and internal customers) in a multinational company producing specialty materials. Three jobs were selected because the organization had expressed the need to determine the competencies of these jobs as input for future training and development plans. These jobs were design and manufacturing engineer (translates orders into production process specifications such as blueprints, materials, machines needed, and standards), technical production operator (handles machines, materials, and tools by studying blueprints, selecting relevant actions, and verifying the operations to determine whether standards were met), and management accountant (conducts financial analyses and provides decision-makers with this information). The Portfolio Sort Cards were used to determine job competencies.

An SME panel was assembled for each of the three jobs involved. Each SME panel consisted of one representative of each of the following four information sources: job incumbents (two males, one female; mean age = 28.8 years; mean tenure in the organization = 3.5 years), supervisors (all male; mean age = 35.7 years; mean tenure = 5.8 years), HR specialists (two males, one female; mean age = 33.4 years; mean tenure = 6.7 years), and internal customers (colleagues) (all male; mean age = 42.4 years; mean tenure = 17.4 years). Familiarity with the focal job was the primary selection criterion for panel membership. All SMEs were also knowledgeable about the business and HR strategies of the organization and had completed a half-day training session that familiarized them with the Portfolio Sort Cards. This training session explained the 67 competencies, their behaviorally-anchored definitions, and the Q-sort method used. At the end of the training session, all SMEs received a manual and a set of competency sort cards.

Results and Discussion

Table 2 presents the results of the within-competency generalizability analyses. Raters explained 15.84% of the variance. This figure is lower than the corresponding one in Study 1 (i.e., 22.20%), suggesting less rater variability and therefore higher inter-rater reliability among experienced SMEs. In a similar vein, jobs accounted for 19.30% of the variance, which is higher than the corresponding 12.85% in Study 1. This finding suggests that experienced SMEs are better able than student raters to discriminate between different competencies across jobs. The generalizability coefficient for the four rater types was .62, which is also higher than the corresponding generalizability coefficient of four student raters in Study 1.
It would be worthwhile to know which of the four types of raters (incumbents, supervisors, HR specialists, and internal customers) provided the most reliable and differentiated ratings. Such within-rater-type analyses were not possible, however, because we had only one rater per source. Nevertheless, our data permitted an examination of differences among rater types. To this end, we repeated the previous generalizability analyses four times, excluding one of the rater types each time. For instance, we ran a generalizability analysis including incumbents, supervisors, and HR specialists, but excluding internal customers. Interestingly, the variance component associated with type of rater dropped only when internal customers were left out. This finding suggests that the ratings of internal customers were most different from the ratings of the other sources. Therefore, when the perspective of internal customers is not considered important, their exclusion may facilitate higher inter-rater reliability.
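The leave-one-rater-type-out comparison just described can be made concrete with a short sketch. The data and helper function below are hypothetical (they are not the study's data or code); the sketch simply re-estimates the rater variance component after dropping each rater type's column in turn.

```python
import numpy as np

# Hypothetical within-competency matrix: rows = the 3 jobs,
# columns = the 4 rater types.
sources = ["incumbent", "supervisor", "HR specialist", "internal customer"]
X = np.array([[2., 2., 1., 4.],
              [3., 3., 3., 5.],
              [4., 4., 5., 2.]])

def rater_variance(M):
    """ANOVA estimate of the rater variance component in a crossed
    objects-by-raters design with one observation per cell."""
    n_o, n_r = M.shape
    grand = M.mean()
    ms_r = n_o * ((M.mean(axis=0) - grand) ** 2).sum() / (n_r - 1)
    resid = M - M.mean(axis=1, keepdims=True) - M.mean(axis=0) + grand
    ms_or = (resid ** 2).sum() / ((n_o - 1) * (n_r - 1))
    return max((ms_r - ms_or) / n_o, 0.0)

for i, left_out in enumerate(sources):
    kept = np.delete(X, i, axis=1)          # drop one rater type's column
    print(f"excluding {left_out}: rater variance = {rater_variance(kept):.3f}")
```

A drop in the rater variance component when a particular source is excluded indicates that this source's ratings diverged most from the rest of the panel, which is the pattern the study reports for internal customers.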
The results of the within-job generalizability analyses are shown in Table 3. Because we conducted these analyses according to the procedure proposed by VanLeeuwen and Mandabach (2002), the presentation of the results differs somewhat from that of our other analyses. In particular, Table 3 reports the estimate of the variance component of the measurement object (i.e., the competencies), the estimate of the relative error variance, and the resulting generalizability coefficient, broken down by job. There were substantial differences in the generalizability coefficients among jobs. For instance, when the four types of raters rated the 67 competencies for the job of management accountant, the generalizability coefficient was .85, whereas it was only .61 for the technical production operator job. The job of design and manufacturing engineer produced a generalizability coefficient of .72.

Three noteworthy conclusions follow from a comparison between Study 1 and Study 2. First, the competency modeling approach yielded more acceptable levels of inter-rater reliability when the SME panel consisted of job incumbents, supervisors, internal customers, and HR specialists than when it consisted of student raters like those employed in Study 1. Second, experienced SMEs seemed better able than the student raters in Study 1 to discriminate among the relative importance of each competency for each job. Finally, competency modeling seemed to work better for some jobs than for others, because within-job generalizability coefficients varied considerably, with the lowest value found for the job of technical production operator and the highest value obtained for the job of management accountant. Perhaps the Portfolio Sort Cards are better suited for describing certain jobs (i.e., managerial jobs) than others (i.e., entry-level jobs), although the Portfolio Sort Cards include a large number (67) of competencies precisely to increase their applicability to a wide variety of jobs. Still another explanation for the lower generalizability coefficient of the production operator is that this job might have had slightly different content across departments and employees (see also Borman, Dorsey, & Ackerman, 1992). In fact, many employees across different departments were working as production operators in this organization. Conversely, the job of management accountant was a newly defined job, and there was only one management accountant in the organization.

In general, the results of Study 2 are somewhat more encouraging for competency modeling than those of Study 1, because the use of a diverse panel of SMEs increased the quality of inferences. Study 1 and Study 2 employed different mechanisms for increasing the quality of inferences (i.e., task information in Study 1 and the use of different types of SMEs in Study 2). However, given the limitations of a lab study (Study 1), as well as the fact that practitioners may choose not just one but both mechanisms, it was deemed appropriate to examine their combined effects. For this reason, Study 3 compared the quality of inferences made in a blended versus a competency modeling approach, using a diverse panel of SMEs in a real organizational setting.

Study 3
